We have been fantasising about artificial intelligence for a long time. This fascination has materialised in cultural masterpieces, in films and books such as 2001: A Space Odyssey, Metropolis, Blade Runner, The Matrix, I, Robot, Westworld, and more. Most raise deep philosophical questions about human nature, but they also explore the potential behaviours and ethics of artificial intelligence, usually through a rather pessimistic lens. Although they are only works of fiction, they go to show how wary we are of our creations becoming our masters.
The democratisation of AI reached a new milestone when large language models emerged. But for all the praise they have received, they have rung just as many alarm bells. We quickly witnessed flaws inherent in these new models, such as hallucinations, as well as unethical uses including misinformation and copyright infringement, prompting concern and calls for caution from some of the brightest minds in the space. Their point was that we shouldn’t enter an AI innovation race without putting in place the security and ethical guardrails needed to mitigate the use of AI for malicious purposes, or the creation of defective AI systems that could have serious ramifications for our society.
Conversations about regulating AI are happening worldwide, which should help foster healthy progress. EU member states are leading this effort, having already agreed on the AI Act back in December, which many hope will influence other regulations globally, much as the GDPR did for privacy. In November, a number of nations also signed an agreement to make security the number one priority in AI design requirements.
It is reassuring to see proactive governments starting to adopt AI legislation and regulation, but the legislative pace is such that we could still be a couple of years away from it having a real impact on mitigating the unethical and unsafe use of the technology. In the meantime, organisations need to take the matter into their own hands. More companies than ever will have the opportunity to consume, experiment with, integrate, and develop AI systems in the coming months and years, and there are existing principles they should consider and use as guidelines to do so responsibly:
- Security and privacy covers four pillars:
  - Using AI securely, for example by ensuring that sensitive data is not exposed to public GenAI tools and that privacy is not jeopardised. It also means considering the ethical aspects: some jurisdictions have started penalising companies for using biased AI, which may become a standard feature of AI regulation in the future.
  - Protecting the organisation against AI attacks. I mentioned earlier that AI is a new ecosystem for threat actors to target; organisations should keep abreast of this and protect their systems and people from the various emerging threats.
  - Building AI securely by adopting privacy-by-design and security-by-design processes. This also includes securing the environment and supply chain in which the AI is being developed.
  - Protecting AI models and their training data in production, especially from threats such as data poisoning, which could make a model defective and/or biased. One simple screening defence is sketched after this list.
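To make the data-poisoning pillar concrete, here is a minimal sketch of one possible screening defence: flagging statistical outliers in a training set before a model is trained on it. It assumes scikit-learn and uses a synthetic dataset as an illustrative placeholder; real poisoning attacks can be far subtler, so treat this as one layer of defence, not a complete solution.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for a training set: 1,000 legitimate samples plus
# 10 injected samples that sit far from the rest of the distribution.
rng = np.random.default_rng(seed=42)
X_clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))
X_poison = rng.normal(loc=6.0, scale=0.5, size=(10, 8))
X = np.vstack([X_clean, X_poison])

# Isolation forests isolate anomalies quickly; fit_predict returns
# 1 for inliers and -1 for suspected outliers. The contamination rate
# is an assumed tuning parameter, not a universal constant.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(X)

X_screened = X[labels == 1]  # train only on the screened subset
print(f"Dropped {np.sum(labels == -1)} of {len(X)} samples as suspicious")
```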
- Transparency and explainability are necessary for organisations developing AI. The decisions and outputs of an otherwise black-box AI system should be easy to explain and demonstrate when required. They should also be traceable and predictable. A small example of one explainability technique follows.
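As a minimal sketch of what explainability can look like in practice, the snippet below uses permutation importance: each feature is shuffled in turn, and the drop in the model's test score indicates how heavily the model relies on that feature. It assumes scikit-learn, and the synthetic classification task is a stand-in for a real production model; this is one technique among many, not a complete explainability programme.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset and model.
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and measure the average drop in test
# accuracy: a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```

Recording such importance scores alongside each model version is one simple way to make a model's behaviour traceable over time.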
- Careful reflection on bias and fairness is also critical. Organisations developing AI models need to build them without bias and ensure their fairness over the long term. This can be done by applying one or more of the following (a pre-processing example is sketched after this list):
  - Pre-processing: mitigation methods applied to the training dataset before a model is trained on it.
  - In-processing: mitigation techniques incorporated into the model training process itself.
  - Post-processing: methods that adjust the model's predictions to achieve the desired fairness.
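As a minimal sketch of the pre-processing category, the snippet below implements reweighing in the spirit of Kamiran and Calders: each training sample gets a weight so that, under the weighted distribution, the protected attribute and the label look statistically independent. It assumes NumPy, and the tiny arrays are illustrative placeholders, not real data.

```python
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute (two groups)
label = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # favourable outcome = 1

weights = np.empty(len(label))
for g in np.unique(group):
    for y in np.unique(label):
        mask = (group == g) & (label == y)
        if mask.any():
            # Weight = expected frequency if group and label were
            # independent, divided by the observed frequency of this cell.
            expected = np.mean(group == g) * np.mean(label == y)
            weights[mask] = expected / np.mean(mask)

# Here group 0 is over-represented among favourable outcomes, so its
# favourable samples are down-weighted and the rest up-weighted.
print(weights)  # [0.67 0.67 0.67 2.0 2.0 0.67 0.67 0.67]
```

Most learning libraries accept such weights at training time (for example through a sample_weight argument), which makes this technique straightforward to slot into an existing pipeline.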
- Inclusive collaboration means ensuring that various stakeholders and teams (business, risk, legal and compliance, security, public relations, etc.) are engaged in the AI design and oversight process, and that the use of AI is assessed across all areas. Involving diverse stakeholders helps prevent bias and improves the quality of the outcome.
- Finally, it is essential to define ownership and accountability for each AI system in use. Whose responsibility is it to ensure that an AI tool continues to operate appropriately, and who is accountable when something goes wrong? And what are the potential legal and regulatory implications for the organisation and the accountable individual(s)?
As we wait for more regulation, AI innovation will continue to develop, and these five principles should spawn a race to the top for responsible AI and AI safety, a differentiator that is fast becoming a competitive advantage.